290 research outputs found

    Efficient Model-Based Reconstruction for Dynamic MRI

    Full text link
    Dynamic magnetic resonance imaging (MRI) has important clinical and neuroscience applications (e.g., cardiac disease diagnosis, neurological behavior studies). It captures an object in motion by acquiring data across time, then reconstructing a sequence of images from those data. This dissertation considers efficient dynamic MRI reconstruction using handcrafted models, to achieve fast imaging with high spatial and temporal resolution. Our modeling framework considers the data acquisition process, image properties, and artifact correction. The reconstruction model, expressed as a large-scale inverse problem, is solved with optimization algorithms, and we consider efficient implementations that exploit underlying problem structures. In the context of dynamic MRI reconstruction, we investigate efficient updates in two algorithm frameworks for solving a nonsmooth composite convex optimization problem for the low-rank plus sparse (L+S) model. In the proximal gradient framework, current algorithms for the L+S model use the classical iterative soft-thresholding algorithm (ISTA); we consider two accelerated alternatives, one based on the fast iterative shrinkage-thresholding algorithm (FISTA) and the other on the recent proximal optimized gradient method (POGM). In the augmented Lagrangian (AL) framework, we propose an efficient variable splitting scheme based on the form of the data acquisition operator, leading to simpler computation than the conjugate gradient (CG) approach required by existing AL methods. Numerical results suggest faster convergence of our efficient implementations in both frameworks, with POGM providing the fastest convergence overall and the practical benefit of being free of algorithm tuning parameters. In the context of magnetic field inhomogeneity correction, we present an efficient algorithm for a regularized field inhomogeneity estimation problem. Most existing minimization techniques are computationally or memory intensive for 3D datasets, and are designed for single-coil MRI. We consider 3D MRI with optional modeling of coil sensitivity and a generalized formulation that addresses both multi-echo field map estimation and water-fat imaging. Our efficient algorithm uses a preconditioned nonlinear conjugate gradient method based on an incomplete Cholesky factorization of the Hessian of the cost function, along with a monotonic line search. Numerical experiments show the computational advantage of the proposed algorithm over state-of-the-art methods with similar memory requirements. In the context of task-based functional MRI (fMRI) reconstruction, we introduce a space-time model that represents an fMRI timeseries as a sum of task-correlated signal and non-task background. Our model consists of a spatiotemporal decomposition based on assumptions about the activation waveform shape, with spatial and temporal smoothness regularization on the magnitude and phase of the timeseries. Compared with two contemporary task fMRI decomposition models, our proposed model yields better timeseries and activation maps on simulated and human-subject fMRI datasets with multiple tasks. The above examples are part of a larger framework for model-based dynamic MRI reconstruction.
    This dissertation concludes by presenting a general framework with flexibility in model assumptions and artifact compensation options (e.g., field inhomogeneity, head motion), and by proposing future work on both the framework and its connection to data acquisition.
    PhD, Applied and Interdisciplinary Mathematics, University of Michigan, Horace H. Rackham School of Graduate Studies
    http://deepblue.lib.umich.edu/bitstream/2027.42/168081/1/yilinlin_1.pd
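
    To make the L+S reconstruction concrete, the sketch below shows one ISTA-style proximal gradient iteration of the kind the dissertation accelerates with FISTA and POGM: a gradient step on the data-fidelity term, followed by singular value thresholding for the low-rank component and soft thresholding for the sparse component. The operator handles, step size, and regularization weights are illustrative assumptions, not the dissertation's actual implementation, and the sparse term is simplified (in practice it is often applied in a temporal transform domain).

```python
# Minimal sketch (not the dissertation's code): one ISTA-style proximal
# gradient update for an L + S dynamic MRI model
#   min_{L,S}  0.5*||A(L+S) - d||_2^2 + lam_L*||L||_* + lam_S*||S||_1
# Columns of the space-time matrices L and S are image frames.
import numpy as np

def soft_threshold(x, t):
    """Entrywise (complex-safe) soft thresholding: prox of t * ||.||_1."""
    mag = np.abs(x)
    return x * np.maximum(1.0 - t / np.maximum(mag, 1e-12), 0.0)

def svt(X, t):
    """Singular value thresholding: prox of t * ||.||_* (nuclear norm)."""
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    return (U * soft_threshold(s, t)) @ Vh

def lps_ista_step(L, S, d, A, At, step, lam_L, lam_S):
    """One proximal gradient update for the L+S model.

    A, At : callables for the acquisition operator and its adjoint (assumed).
    step  : gradient step size, e.g. 1 / Lipschitz constant of A^H A.
    """
    grad = At(A(L + S) - d)                            # gradient of data term
    L = svt(L - step * grad, step * lam_L)             # low-rank proximal step
    S = soft_threshold(S - step * grad, step * lam_S)  # sparse proximal step
    return L, S
```

    FISTA and POGM wrap momentum terms around this same basic update, and the proposed AL variant replaces the gradient step with a variable splitting tailored to the form of the acquisition operator, as described in the abstract.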

    Computer Simulation of Bioprocess

    Get PDF
    Bioprocess optimization is important for making bioproduction more efficient and economical. Conventional optimization methods are costly and inefficient. Modeling and computer simulation, on the other hand, can reveal the mechanisms behind observed phenomena to some extent, supporting deeper analysis and more efficient optimization of bioprocesses. This chapter presents modeling and computer simulation of microbial growth and metabolism kinetics, bioreactor dynamics, and bioreactor feedback control, to illustrate how these methods are applied and how useful they are for optimizing bioprocess technology.
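
    As a minimal illustration of the kind of growth-kinetics model the chapter discusses, the sketch below simulates a batch bioreactor with Monod kinetics for biomass growth and substrate consumption. The parameter values and the use of SciPy's ODE solver are assumptions for demonstration only, not taken from the chapter.

```python
# Illustrative sketch (not from the chapter): batch bioreactor with Monod
# growth kinetics. All parameter values below are assumed for demonstration.
#   dX/dt = mu(S) * X,   mu(S) = mu_max * S / (Ks + S)
#   dS/dt = -mu(S) * X / Yxs
import numpy as np
from scipy.integrate import solve_ivp

mu_max, Ks, Yxs = 0.4, 0.5, 0.5   # 1/h, g/L, g biomass per g substrate (assumed)

def batch_reactor(t, y):
    X, S = y                             # biomass and substrate concentrations (g/L)
    mu = mu_max * S / (Ks + S)           # Monod specific growth rate
    return [mu * X, -mu * X / Yxs]

sol = solve_ivp(batch_reactor, (0.0, 24.0), [0.1, 10.0], dense_output=True)
t = np.linspace(0.0, 24.0, 100)
X, S = sol.sol(t)
print(f"final biomass {X[-1]:.2f} g/L, residual substrate {S[-1]:.2f} g/L")
```

    Extending this toward the chapter's other topics would mean adding feed and dilution terms for fed-batch or continuous operation, and wrapping a feedback controller around a measured variable such as substrate or dissolved oxygen.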

    ChatAnything: Facetime Chat with LLM-Enhanced Personas

    Full text link
    In this technical report, we target generating anthropomorphized personas for LLM-based characters in an online manner, including visual appearance, personality, and tone, from text descriptions alone. To achieve this, we first leverage the in-context learning capability of LLMs for personality generation by carefully designing a set of system prompts. We then propose two novel concepts: the mixture of voices (MoV) and the mixture of diffusers (MoD) for diverse voice and appearance generation. For MoV, we use text-to-speech (TTS) algorithms with a variety of predefined tones and automatically select the one that best matches the user-provided text description. For MoD, we combine popular recent text-to-image generation techniques with talking-head algorithms to streamline the process of generating talking objects. We term the whole framework ChatAnything. With it, users can animate anything with anthropomorphic personas using just a few text inputs. However, we have observed that anthropomorphic objects produced by current generative models are often undetectable by pre-trained face landmark detectors, leading to failure of face motion generation even when the faces have human-like appearances, because such images fall largely outside the detectors' training distribution (i.e., they are OOD samples). To address this issue, we incorporate pixel-level guidance to infuse human face landmarks during the image generation phase. To benchmark this, we built an evaluation dataset. On it, we verify that the face landmark detection rate increases significantly, from 57.0% to 92.5%, allowing automatic face animation driven by generated speech content. The code and more results can be found at https://chatanything.github.io/
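
    As a hedged illustration of how an MoV-style selection step could work, the sketch below matches a user-provided persona description against a small catalog of predefined voice tones by sentence-embedding similarity and returns the closest one. The encoder model, the voice catalog, and the function names are assumptions; ChatAnything's actual selection mechanism may differ.

```python
# Hedged sketch of a mixture-of-voices (MoV) style selection step: match a
# persona description to a catalog of predefined TTS tones by embedding
# similarity. Model name and voice catalog are assumptions, not the paper's.
import numpy as np
from sentence_transformers import SentenceTransformer

VOICE_CATALOG = {                      # hypothetical predefined tones
    "warm_narrator": "calm, warm, slow-paced storytelling voice",
    "energetic_host": "upbeat, fast, enthusiastic announcer voice",
    "wise_elder": "deep, measured, gravelly elderly voice",
}

def pick_voice(persona_description: str, model_name: str = "all-MiniLM-L6-v2") -> str:
    model = SentenceTransformer(model_name)
    names = list(VOICE_CATALOG)
    texts = [persona_description] + [VOICE_CATALOG[n] for n in names]
    emb = model.encode(texts, normalize_embeddings=True)  # unit-norm embeddings
    sims = emb[1:] @ emb[0]                               # cosine similarities
    return names[int(np.argmax(sims))]

# e.g. pick_voice("a grumpy old wizard who speaks slowly")  -> likely "wise_elder"
```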

    Online Map Vectorization for Autonomous Driving: A Rasterization Perspective

    Full text link
    Vectorized high-definition (HD) maps are essential for autonomous driving, providing detailed and precise environmental information for advanced perception and planning. However, current map vectorization methods often exhibit deviations, and the existing evaluation metric for map vectorization lacks sufficient sensitivity to detect these deviations. To address these limitations, we propose integrating the philosophy of rasterization into map vectorization. Specifically, we introduce a new rasterization-based evaluation metric, which has superior sensitivity and is better suited to real-world autonomous driving scenarios. Furthermore, we propose MapVR (Map Vectorization via Rasterization), a novel framework that applies differentiable rasterization to vectorized outputs and then performs precise and geometry-aware supervision on rasterized HD maps. Notably, MapVR designs tailored rasterization strategies for various geometric shapes, enabling effective adaptation to a wide range of map elements. Experiments show that incorporating rasterization into map vectorization greatly enhances performance with no extra computational cost during inference, leading to more accurate map perception and ultimately promoting safer autonomous driving.
    Comment: NeurIPS 2023
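
    To illustrate the intuition behind a rasterization-based comparison of vectorized map elements, the sketch below rasterizes predicted and ground-truth polylines onto a binary grid and compares them with IoU, which penalizes geometric deviations along the whole curve rather than only at sampled points. The resolution, brush thickness, and function names are assumptions; MapVR's actual metric and its differentiable rasterizer are more involved.

```python
# Hedged sketch: rasterize two polylines and compare them with IoU.
# Grid size, thickness, and names are illustrative assumptions only.
import numpy as np

def rasterize_polyline(points, shape=(200, 200), thickness=2):
    """Rasterize a polyline (N x 2 array of (row, col) points) onto a binary grid."""
    grid = np.zeros(shape, dtype=bool)
    for (r0, c0), (r1, c1) in zip(points[:-1], points[1:]):
        n = int(max(abs(r1 - r0), abs(c1 - c0))) + 1     # samples along the segment
        rs = np.linspace(r0, r1, n).round().astype(int)
        cs = np.linspace(c0, c1, n).round().astype(int)
        for r, c in zip(rs, cs):                          # stamp a small square brush
            grid[max(r - thickness, 0):r + thickness + 1,
                 max(c - thickness, 0):c + thickness + 1] = True
    return grid

def raster_iou(pred_pts, gt_pts, **kw):
    a, b = rasterize_polyline(pred_pts, **kw), rasterize_polyline(gt_pts, **kw)
    inter, union = np.logical_and(a, b).sum(), np.logical_or(a, b).sum()
    return inter / union if union else 0.0

# e.g. a lane boundary shifted by 2 pixels still overlaps partially:
# raster_iou(np.array([[10, 10], [10, 180]]), np.array([[12, 10], [12, 180]]))
```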